📚 node [[inter rater_agreement|inter rater agreement]]
⥅ related node [[inter rater_agreement]]
⥅ node [[inter-rater_agreement]] pulled by Agora
📓 garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/Inter-Rater_Agreement.md by @KGBicheno
inter-rater agreement
Go back to the [[AI Glossary]]
A measurement of how often human raters agree when doing a task. If raters disagree, the task instructions may need to be improved. Also sometimes called inter-annotator agreement or inter-rater reliability. See also Cohen's kappa, which is one of the most popular inter-rater agreement measurements.
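Since the entry points to Cohen's kappa, here is a minimal Python sketch of that statistic for two raters assigning categorical labels. This is an illustration added alongside the glossary note, not part of it: the `cohens_kappa` helper and the spam/ham example data are made up, and scikit-learn's `cohen_kappa_score` offers an established implementation of the same formula.

```python
# Minimal sketch: Cohen's kappa for two raters with categorical labels.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
# p_e is the agreement expected by chance from each rater's label frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of each rater's
    # marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n) for label in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters labelling the same 10 items as spam or ham.
a = ["spam", "spam", "ham", "ham", "spam", "ham", "ham", "spam", "ham", "ham"]
b = ["spam", "ham",  "ham", "ham", "spam", "ham", "spam", "spam", "ham", "ham"]
print(cohens_kappa(a, b))  # ~0.58: moderate agreement beyond chance
```

Because kappa subtracts the agreement expected by chance, a value near 1 indicates strong agreement beyond chance, while a value near 0 indicates agreement no better than chance.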